187. Filebeat on ECK
Preface
I usually deploy Filebeat as a DaemonSet that scrapes the containers' standard output directly and ships it to Elasticsearch.
Here, though, the applications still write plain log files to a specific directory and Filebeat has to pick them up from there, so a ReadWriteMany NFS volume also needs to be mounted.
That part of the setup still confuses me a bit and needs more testing.
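For reference, a minimal sketch of what that ReadWriteMany claim could look like; the storage class name and size here are assumptions and depend on the NFS provisioner in the cluster. The Beat spec below mounts it as logs-nfs-test-pvc:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany              # app pods write the log files, Filebeat reads them
  storageClassName: nfs-client   # assumption: the NFS StorageClass available in the cluster
  resources:
    requests:
      storage: 10Gi              # assumption: adjust to the expected log volume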
Main text
Since Elasticsearch 8.x, TLS certificates are required, so shipping over plain HTTP is a hassle; you might even have to pin an older version, and in the end I gave up on that route.
The advantage of using ECK is that there is no need to configure the CA yourself, but on the Filebeat side you then have to reference Kibana and Elasticsearch explicitly; since they live in a different namespace, the namespace must be specified as well.
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
spec:
  type: filebeat
  version: 8.11.0
  elasticsearchRef:
    name: fixed
    namespace: elastic-system
  kibanaRef:
    name: fixed
    namespace: elastic-system
  config:
    filebeat.inputs:
      # websocketclient
      - type: log
        ignore_older: 24h
        enabled: true
        paths:
          - /var/log/app-logs/websocketclient/*.log
        json.keys_under_root: true
        json.add_error_key: true
        fields_under_root: true
        tags: ["prod-sms-back-websocketclient"]
      # filebeat_self
      - type: log
        ignore_older: 24h
        enabled: true
        paths:
          - /var/log/filebeat/*.ndjson
        json.keys_under_root: true
        json.add_error_key: true
        fields_under_root: true
        tags: ["filebeat_self"]
    processors:
      - timestamp:
          field: LoggingTime
          layouts:
            - '2006-01-02T15:04:05Z'
            - '2006-01-02T15:04:05.999Z'
            - '2006-01-02T15:04:05.999-07:00'
          test:
            - '2019-06-22T16:33:51Z'
            - '2019-11-18T04:59:51.123Z'
            - '2020-08-03T07:10:20.123456+02:00'
      - add_locale:
          format: offset
      - drop_fields:
          fields: ["agent", "input", "host", "log", "ecs", "data_stream", "event.timezone", "LoggingTime"]
    output.elasticsearch:
      username: "elastic"
      password: "abc"
      indices:
        # websocketclient
        - index: "prod-sms-back-websocketclient-%{+yyyy.MM.dd}"
          when.contains:
            tags: "prod-sms-back-websocketclient"
        # filebeat_self
        - index: "filebeat_self-%{+yyyy.MM.dd}"
          when.contains:
            tags: "filebeat_self"
    logging:
      metrics.enabled: false
      level: info
      to_files: true
      files:
        path: /var/log/filebeat
        name: filebeat
        keepfiles: 7
        permissions: 0644
  deployment:
    podTemplate:
      spec:
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              # privileged: true
            volumeMounts:
              - name: applog
                mountPath: /var/log/app-logs
        volumes:
          - name: applog
            persistentVolumeClaim:
              claimName: logs-nfs-test-pvc
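One note on the hard-coded credentials above: ECK stores the generated elastic superuser password in a secret named <cluster-name>-es-elastic-user, so it can be looked up instead of guessed; a quick sketch, using the cluster name fixed and namespace elastic-system from the elasticsearchRef:

# print the elastic user's password generated by ECK
kubectl get secret fixed-es-elastic-user -n elastic-system \
  -o go-template='{{.data.elastic | base64decode}}'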
As a side note, here is the complete YAML for the ECK installation.
# 1. Set up the ECK operator and CRDs first:
# kubectl create -f https://download.elastic.co/downloads/eck/2.10.0/crds.yaml
# kubectl apply -f https://download.elastic.co/downloads/eck/2.10.0/operator.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: fixed
  # name: yabo
  namespace: elastic-system
spec:
  version: 8.11.0
  nodeSets:
    - name: all
      count: 1
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: -Xms2g -Xmx2g
              resources:
                requests:
                  memory: 4Gi
                limits:
                  memory: 4Gi
      config:
        node.roles:
          - master
          - data
          - ingest
        node.attr.attr_name: attr_value
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: standard
            # The storage request needs to be increased; the default of only 1Gi is not enough.
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: fixed
  namespace: elastic-system
spec:
  version: 8.11.0
  count: 1
  elasticsearchRef:
    name: fixed
  http:
    tls:
      selfSignedCertificate:
        disabled: true
    service:
      spec:
        ports:
          - name: http
            port: 5601
            targetPort: 5601
---
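With the self-signed certificate disabled, Kibana is served over plain HTTP on port 5601. For a quick check, a port-forward against the ECK-generated service (named <kibana-name>-kb-http, so fixed-kb-http here) is enough:

kubectl -n elastic-system port-forward service/fixed-kb-http 5601:5601
# then browse http://localhost:5601 and log in as the elastic user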
ref. ECK deployment